Search Results: "dave"

4 January 2015

Russ Allbery: Review: Programming Ruby

Review: Programming Ruby, by Dave Thomas, et al.
Publisher: Pragmatic Bookshelf
Copyright: 2005
Printing: May 2006
ISBN: 0-9745140-5-5
Format: Trade paperback
Pages: 785
There are a few different editions of this book. The version I read is the second edition, written by Dave Thomas with Chad Fowler and Andy Hunt and published in 2005, covering Ruby 1.8.2. There's now a fourth edition, covering Ruby 1.9 and 2.0, which is probably what you'd want if you were buying this book today. This book, in whatever edition, is called the Pickaxe in the Ruby community after its cover.

I've used a lot of different programming languages, so I can usually pick one up on the fly reasonably well, but I still like to read a good introductory book before using one seriously. It's a bit too easy to get lost or to fall into habits that don't match the best practices of the language community without a solid introduction. I've been using a bit of Ruby off and on since I started using Puppet, but I'm looking at doing more serious development using Chef, so I decided it was time to get that introduction. (It helped that I had this book sitting around, although that's also why I read an older edition.)

Programming Ruby starts with the obligatory introduction to installing and running Ruby, and then provides a high-level introduction to the language and its basic types, just enough to make Ruby comprehensible before starting into the object system. Everything is an object in Ruby, so the book introduces the object system as early as possible, and then shows the rest of the language, from constants up, in the light of that object system. The rest of part one follows the normal language introduction path, building up from constants and methods to exceptions, modules, and basic IO. It closes with chapters about threads and processes, unit testing, and the debugger.

Part two is a grab-bag of one-chapter topics describing how to use Ruby in a particular setting, or showing one angle of the language. The best of those chapters for me was the one on RDoc, partly because I'm quite impressed by Ruby's documentation system. A few of these chapters are oddly in-depth for an introductory book: I doubt I'm ever going to use all the details about special irb configuration, and if I do, I'd just look them up. But I greatly appreciated the solid chapter on how to write Ruby extensions in C. There is also the obligatory chapter on writing GUI applications with Tk, which always seems to show up in these sorts of introductions and which always baffles me. Does anyone actually do this any more instead of writing a web application?

Part three dives back into the language and provides a more complete and formal description. The authors aren't afraid to get into some of the internals, which I appreciated. There is a good chapter here on the details of the type system and how objects and classes interact, and a much-needed extended discussion of duck typing. This type of weak typing and runtime binding is fundamental to how Ruby approaches objects, for better or worse. (I have mixed opinions; it makes some things easier, but I increasingly appreciate strong typing and more formal interface definitions.) Some discussion of marshalling and introspection closes out the discussion portion of the book.

That's about 420 pages of material. The rest of the book is a detailed reference on all of the core classes, and a quicker overview of the standard library. Normally, this sort of thing is thrown into language introductions to pad out the page count, and usually the language's official documentation is better at this sort of reference. But I found Programming Ruby to be an exception.
The reference is succinct, sticking to a paragraph or two for each method, and does a great job of providing enough cross-references and discussion to put each class into a broader perspective. It's the most useful example of this type of reference section I've seen. I still probably won't use it after this initial reading, but I think I got a better feel for the language from reading through it.

It's hard to review a book like this without reviewing the language it documents, at least a little bit. I'll indulge: it entertains me how much Ruby is obviously based on Perl, including borrowing some of Perl's more dubious ideas. The global punctuation variables will look familiar to any Perl programmer, and the oddly-named global variables for the interpreter flags are in the same spirit. The language unfortunately has similar problems to Perl with safely running commands without using the shell; it's possible, but it's not the default and not what the built-ins do. There are places where I wish Ruby were a little less like Perl. The plus side for an experienced Perl programmer is that Ruby feels quite familiar and has made some clear improvements. The ? and ! convention for methods that return booleans or modify objects in place is brilliant in its simplicity, and something I'd love to see in more languages. And the way Ruby implements ubiquitous code blocks, both for iterators and for any temporary objects, is lovely once one gets used to it. It's similar to Python's context managers, except more general and built deeper into the language. Returning to the review of the book, rather than the topic, Programming Ruby has a good, clear explanation of blocks, iterators, and yield.

If you're interested in getting a grounding in Ruby, this book still feels like a solid introduction. The edition I read is getting a bit long in the tooth now that we're on Ruby 2.1, but the pace of language change has slowed, and most of the book is still applicable. (If you're buying it new, you should, of course, get the later edition.) The table of contents makes it seem like the book is covering the same ground multiple times, but that organizational strategy worked better than I expected. Ruby is not the most organized language in the world, so I still felt a bit overwhelmed by random method names in places, but I never felt lost in the mechanics of the language. In short, recommended if you want a good introduction to the language, although probably in a later edition.

Rating: 8 out of 10

17 December 2014

Keith Packard: MST-monitors

Multi-Stream Transport 4k Monitors and X

I'm sure you've seen a 4k monitor on a friend's desk running Mac OS X or Windows and are all ready to go get one so that you can use it under Linux. Once you've managed to acquire one, I'm afraid you'll discover that when you plug it in, you're limited to 30Hz refresh rates at the full size, unless you're running a kernel that is version 3.17 or later. And then...

Good Grief! What Is My Computer Doing!

Ok, so now you're running version 3.17 and when X starts up, it's like you're using a gigantic version of Google Cardboard. Two copies of a very tall, but very narrow, screen greet you. Welcome to MST island. In order to drive these giant new panels at full speed, there isn't enough bandwidth in the display hardware to individually paint each pixel once during each frame. So, like all good hardware engineers, they invented a clever hack. This clever hack paints the screen in parallel. I'm assuming that they've got two bits of display hardware, each one hooked up to half of the monitor. Now, each paints only half of the pixels, avoiding a costly redesign of expensive silicon; at least that's my surmise. In the olden days, if you did this, you'd end up running two monitor cables to your computer, and potentially even having two video cards. Today, thanks to the magic of DisplayPort Multi-Stream Transport, we don't need all of that; instead, MST allows us to pack multiple cables' worth of data into a single cable. I doubt the inventors of MST intended it to be used to split a single LCD panel into multiple "monitors", but hardware engineers are clever folk and are more than capable of abusing standards like this when it serves to save a buck.

Turning Two Back Into One

We've got lots of APIs that expose monitor information in the system, and across which we might be able to wave our magic abstraction wand to fix this:
  1. The KMS API. This is the kernel interface which is used by all graphics stuff, including user-space applications and the frame buffer console. Solve the problem here and it works everywhere automatically.
  2. The libdrm API. This is just the KMS ioctls wrapped in a simple C library. Fixing things here wouldn't make fbcons work, but would at least get all of the window systems working.
  3. Every 2D X driver. (Yeah, we're trying to replace all of these with the one true X driver). Fixing the problem here would mean that all X desktops would work. However, that's a lot of code to hack, so we'll skip this.
  4. The X server RandR code. More plausible than fixing every driver, this also makes X desktops work.
  5. The RandR library. If not in the X server itself, how about over in user space in the RandR protocol library? Well, the problem here is that we've now got two of them (Xlib and xcb), and the xcb one is auto-generated from the protocol descriptions. Not plausible.
  6. The Xinerama code in the X server. Xinerama is how we did multi-monitor stuff before RandR existed. These days, RandR provides Xinerama emulation, but we've been telling people to switch to RandR directly.
  7. Some new API. Awesome. Ok, so if we haven't fixed this in any existing API we control (kernel/libdrm/X.org), then we effectively dump the problem into the laps of the desktop and application developers. Given how long it's taken them to adopt current RandR stuff, providing yet another complication in their lives won't make them very happy.
All Our APIs Suck

Dave Airlie merged MST support into the kernel for version 3.17 in the simplest possible fashion: pushing the problem out to user space. I was initially vaguely tempted to go poke at it and try to fix things there, but he eventually convinced me that it just wasn't feasible. It turns out that all of our fancy new modesetting APIs describe the hardware in more detail than any application actually cares about. In particular, we expose a huge array of hardware objects, and each of these objects exposes intimate details about the underlying hardware: which of them can work together and which cannot, what kinds of limits there are on data rates and formats, and pixel-level timing details about blanking periods and refresh rates. To make things work, some piece of code needs to actually hook things up, and explain to the user why the configuration they want just isn't possible. The sticking point we reached was that when an MST monitor gets plugged in, it needs two CRTCs to drive it. If one of those is already in use by some other output, there's just no way you can steal it for MST mode. Another problem: we expose EDID data and actual video mode timings. Our MST monitor has two EDID blocks, one for each half. They happen to describe how they're related and how you should configure them, but if we want to hide that from the application, we'll have to pull those EDID blocks apart and construct a new one. The same goes for video modes; we'll have to construct ones for MST mode. Every single one of our APIs exposes enough of this information to be dangerous. Every one, except Xinerama. All it talks about is a list of rectangles, each of which represents a logical view into the desktop. Did I mention we've been encouraging people to stop using this? And that some of them listened to us? Foolishly?

Dave's Tiling Property

Dave hacked up the X server to parse the EDID strings and communicate the layout information to clients through an output property. Then he hacked up the gnome code to parse that property and build a RandR configuration that would work. Then, he changed the RandR Xinerama code to also parse the TILE properties and to fix up the data seen by applications from that. This works well enough to get a desktop running correctly, assuming that desktop uses Xinerama to fetch this data. Alas, gtk has been "fixed" to use RandR if you have RandR version 1.3 or later. No biscuit for us today.

Adding RandR Monitors

RandR doesn't have enough data types yet, so I decided that what we wanted to do was create another one; maybe that would solve this problem. Ok, so what clients mostly want to know is which bits of the screen are going to be stuck together and should be treated as a single unit. With current RandR, that's some of the information included in a CRTC. You pull the pixel size out of the associated mode, the physical size out of the associated outputs and the position from the CRTC itself. Most of that information is available through Xinerama too; it's just missing physical sizes and any kind of labeling to help the user understand which monitor you're talking about. The other problem with Xinerama is that it cannot be configured by clients; the existing RandR implementation constructs the Xinerama data directly from the RandR CRTC settings. Dave's Tiling property changes edit that data to reflect the union of associated monitors as a single Xinerama rectangle.
Allowing the Xinerama data to be configured by clients would fix our 4k MST monitor problem as well as solving the longstanding video wall, WiDi and VNC troubles. All of those want to create logical monitor areas within the screen under client control.

What I've done is create a new RandR datatype, the "Monitor", which defines a rectangular region of the screen. Each monitor has its own name, geometry, physical size and list of outputs (the MONITORINFO structure in the protocol section below). There are three requests to define, delete and list monitors. And that's it. Now, we want the list of monitors to completely describe the environment, and yet we don't want existing tools to break completely. So, we need some way to automatically construct monitors from the existing RandR state while still letting the user override portions of it as needed to explain virtual or tiled outputs. So, what I did was to let the client specify a list of outputs for each monitor. All of the CRTCs which aren't associated with an output in any client-defined monitor are then added to the list of monitors reported back to clients. That means that clients need only define monitors for things they understand, and they can leave the other bits alone and the server will do something sensible. The second tricky bit is that if you specify an empty rectangle at 0,0 for the pixel geometry, then the server will automatically compute the geometry using the list of outputs provided. That means that if any of those outputs get disabled or reconfigured, the Monitor associated with them will appear to change as well.

Current Status

Gtk+ has been switched to use RandR for RandR versions 1.3 or later. Locally, I hacked libXrandr to override the RandR version through an environment variable, set that to 1.2, and Gtk+ happily reverts back to Xinerama and things work fine. I suspect the plan here will be to have it use the new Monitors when present, as those provide the same info that it was pulling out of RandR's CRTCs. KDE appears to still use Xinerama data for this, so it "just works".

Where's the code

As usual, all of the code for this is in a collection of git repositories in my home directory on fd.o:
git://people.freedesktop.org/~keithp/randrproto master
git://people.freedesktop.org/~keithp/libXrandr master
git://people.freedesktop.org/~keithp/xrandr master
git://people.freedesktop.org/~keithp/xserver randr-monitors
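For poking at this from the command line, the xrandr branch above should also grow matching options; here is a rough sketch of what listing and defining a Monitor might look like. The option names follow that work in progress and may still change, and the output names (DP-1, DP-2) are placeholders for whatever your driver actually reports:

    # List the Monitors the server reports (client-defined plus server-defined ones)
    xrandr --listmonitors
    # Define a Monitor named "tiled-4k" spanning both halves of the panel;
    # the geometry is width/mm-width x height/mm-height + x + y, outputs comma-separated
    xrandr --setmonitor tiled-4k 3840/600x2160/340+0+0 DP-1,DP-2
    # And remove it again
    xrandr --delmonitor tiled-4k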
RandR protocol changes

Here are the new sections added to randrproto.txt:
                   
1.5. Introduction to version 1.5 of the extension
Version 1.5 adds monitors
   A 'Monitor' is a rectangular subset of the screen which represents
   a coherent collection of pixels presented to the user.
   Each Monitor is associated with a list of outputs (which may be
   empty).
   When clients define monitors, the associated outputs are removed from
   existing Monitors. If removing the output causes the list for that
   monitor to become empty, that monitor will be deleted.
   For active CRTCs that have no output associated with any
   client-defined Monitor, one server-defined monitor will
   automatically be defined for the first Output associated with them.
   When defining a monitor, setting the geometry to all zeros will
   cause that monitor to dynamically track the bounding box of the
   active outputs associated with them.
This new object separates the physical configuration of the hardware
from the logical subsets of the screen that applications should
consider as single viewable areas.
1.5.1. Relationship between Monitors and Xinerama
Xinerama's information now comes from the Monitors instead of directly
from the CRTCs. The Monitor marked as Primary will be listed first.
                   
5.6. Protocol Types added in version 1.5 of the extension
MONITORINFO   name: ATOM
          primary: BOOL
          automatic: BOOL
          x: INT16
          y: INT16
          width: CARD16
          height: CARD16
          width-in-millimeters: CARD32
          height-in-millimeters: CARD32
          outputs: LISTofOUTPUT  
                   
7.5. Extension Requests added in version 1.5 of the extension.
 
    RRGetMonitors
    window : WINDOW
      
    timestamp: TIMESTAMP
    monitors: LISTofMONITORINFO
 
    Errors: Window
    Returns the list of Monitors for the screen containing
    'window'.
    'timestamp' indicates the server time when the list of
    monitors last changed.
 
    RRSetMonitor
    window : WINDOW
    info: MONITORINFO
 
    Errors: Window, Output, Atom, Value
    Create a new monitor. Any existing Monitor of the same name is deleted.
    'name' must be a valid atom or an Atom error results.
    'name' must not match the name of any Output on the screen, or
    a Value error results.
    If 'info.outputs' is non-empty, and if x, y, width, height are all
    zero, then the Monitor geometry will be dynamically defined to
    be the bounding box of the geometry of the active CRTCs
    associated with them.
    If 'name' matches an existing Monitor on the screen, the
    existing one will be deleted as if RRDeleteMonitor were called.
    Each output in 'info.outputs' is removed from all
    pre-existing Monitors. If removing the output causes the list of
    outputs for that Monitor to become empty, then that Monitor will
    be deleted as if RRDeleteMonitor were called.
    Only one monitor per screen may be primary. If 'info.primary'
    is true, then the primary value will be set to false on all
    other monitors on the screen.
    RRSetMonitor generates a ConfigureNotify event on the root
    window of the screen.
 
    RRDeleteMonitor
    window : WINDOW
    name: ATOM
 
    Errors: Window, Atom, Value
    Deletes the named Monitor.
    'name' must be a valid atom or an Atom error results.
    'name' must match the name of a Monitor on the screen, or a
    Value error results.
    RRDeleteMonitor generates a ConfigureNotify event on the root
    window of the screen.
                   

9 December 2014

Enrico Zini: radicale-davdroid

Radicale and DAVdroid

radicale and DAVdroid appeal to me. Let's try to make the whole thing work.

A self-signed SSL certificate

Generating the certificate:
    openssl req -nodes -x509 -newkey rsa:2048 -keyout cal-key.pem -out cal-cert.pem -days 3650
    [...]
    Country Name (2 letter code) [AU]:IT
    State or Province Name (full name) [Some-State]:Bologna
    Locality Name (eg, city) []:
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:enricozini.org
    Organizational Unit Name (eg, section) []:
    Common Name (e.g. server FQDN or YOUR name) []:cal.enricozini.org
    Email Address []:postmaster@enricozini.org
Installing it on my phone:
    openssl x509 -in cal-cert.pem -outform DER -out cal-cert.crt
    adb push cal-cert.crt /mnt/sdcard/
    enrico --follow-instructions http://davdroid.bitfire.at/faq/entry/importing-a-certificate
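Before telling the phone to trust the certificate, it does not hurt to compare fingerprints; a quick check on the server side, using the same file names as above:

    # Fingerprint of the PEM certificate...
    openssl x509 -in cal-cert.pem -noout -fingerprint -sha256
    # ...and of the DER copy pushed to the phone; the two must match
    openssl x509 -inform DER -in cal-cert.crt -noout -fingerprint -sha256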
Installing radicale in my VPS

An updated radicale package, with this patch to make it work with DAVdroid:
    apt-get source radicale
    # I reviewed 063f7de7a2c7c50de5fe3f8382358f9a1124fbb6
    git clone https://github.com/Kozea/Radicale.git
    # Move the python code from git to the Debian source
    dch -v 0.10~enrico  "Pulled in the not yet released 0.10 work from upstream"
    debuild -us -uc -rfakeroot
Install the package:
    # dpkg -i python-radicale_0.10~enrico0-1_all.deb
    # dpkg -i radicale_0.10~enrico0-1_all.deb
Create a system user to run it:
    # adduser --system --disabled-password radicale
Configure it for mod_wsgi with auth done by Apache:
    # For brevity, this is my config file with comments removed
    [storage]
    # Storage backend
    # Value: filesystem   multifilesystem   database   custom
    type = filesystem
    # Folder for storing local collections, created if not present
    filesystem_folder = /var/lib/radicale/collections
    [logging]
    config = /etc/radicale/logging
Create the wsgi file to run it:
    # mkdir /srv/radicale
    # cat <<EOT > /srv/radicale/radicale.wsgi
    import radicale
    radicale.log.start()
    application = radicale.Application()
    EOT
    # chown radicale.radicale /srv/radicale/radicale.wsgi
    # chmod 0755 /srv/radicale/radicale.wsgi
Make radicale commit to git
    # apt-get install python-dulwich
    # cd /var/lib/radicale/collections
    # git init
    # chown radicale.radicale -R /var/lib/radicale/collections/.git
Apache configuration

Add a new site to apache:
    $ cat /etc/apache2/sites-available/cal.conf
    # For brevity, this is my config file with comments removed
    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            ServerName cal.enricozini.org
            ServerAdmin enrico@enricozini.org
            Alias /robots.txt /srv/radicale/robots.txt
            Alias /favicon.ico /srv/radicale/favicon.ico
            WSGIDaemonProcess radicale user=radicale group=radicale threads=1 umask=0027 display-name=%{GROUP}
            WSGIProcessGroup radicale
            WSGIScriptAlias / /srv/radicale/radicale.wsgi
            <Directory /srv/radicale>
                    # WSGIProcessGroup radicale
                    # WSGIApplicationGroup radicale
                    # WSGIPassAuthorization On
                    AllowOverride None
                    Require all granted
            </Directory>
            <Location />
                    AuthType basic
                    AuthName "Enrico's Calendar"
                    AuthBasicProvider file
                    AuthUserFile /usr/local/etc/radicale/htpasswd
                    Require user enrico
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/cal-enricozini-org-error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/cal-enricozini-org-access.log combined
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/cal.pem
            SSLCertificateKeyFile /etc/ssl/private/cal.key
    </VirtualHost>
    </IfModule>
Then enable it:
    # a2ensite cal.conf
    # service apache2 reload
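The Apache configuration above points AuthUserFile at /usr/local/etc/radicale/htpasswd, which does not exist yet. One way to create it, using htpasswd from apache2-utils:

    # apt-get install apache2-utils
    # mkdir -p /usr/local/etc/radicale
    # htpasswd -c /usr/local/etc/radicale/htpasswd enrico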
Create collections

DAVdroid seems to want to see existing collections on the server, so we create them:
    $ apt-get install cadaver
    $ cat <<EOT > /tmp/empty.ics
    BEGIN:VCALENDAR
    VERSION:2.0
    END:VCALENDAR
    EOT
    $ cat <<EOT > /tmp/empty.vcf
    BEGIN:VCARD
    VERSION:2.1
    END:VCARD
    EOT
    $ cadaver https://cal.enricozini.org
    WARNING: Untrusted server certificate presented for `cal.enricozini.org':
    [...]
    Do you wish to accept the certificate? (y/n) y
    Authentication required for Enrico's Calendar on server `cal.enricozini.org':
    Username: enrico
    Password: ****
    dav:/> cd enrico/contacts.vcf/
    dav:/> put /tmp/empty.vcf
    dav:/> cd ../calendar.ics/
    dav:/> put /tmp/empty.ics
    dav:/enrico/calendar.ics/> ^D
    Connection to `cal.enricozini.org' closed.
DAVdroid configuration
  1. Add a new DAVdroid sync account
  2. Use server/username configuration
  3. For server, use https:////
  4. Add username and password
It should work. Related links

16 November 2014

Vincent Bernat: Replacing Swisscom router by a Linux box

I have recently moved to Lausanne, Switzerland. Broadband Internet access is not as cheap as in France. Free, a French ISP, provides FTTH access with a bandwidth of 1 Gbps¹ for about €38 (including TV and phone service), while Swisscom provides roughly the same service for about €200². Swisscom fiber access was available for my apartment and I chose the 40 Mbps contract without phone service for about €80. Like many ISPs, Swisscom provides an Internet box with an additional box for TV. I didn't unpack the TV box as I have no use for it. The Internet box comes with some nice features like the ability to set up firewall rules, a guest wireless access and some file sharing possibilities. No shell access!

I have bought a small PC to act as a router and replace the Internet box. I have loaded the upcoming Debian Jessie on it. You can find the whole software configuration in a GitHub repository. This blog post only covers the Swisscom-specific setup (and QoS). Have a look at those two blog posts for related topics:

Ethernet

The Internet box is packed with a Siligence-branded 1000BX SFP³. This SFP receives and transmits data on the same fiber using a different wavelength for each direction. Instead of using a network card with an SFP port, I bought a Netgear GS110TP which comes with 8 gigabit copper ports and 2 fiber SFP ports. It is a cheap switch bundled with many interesting features like VLAN and LLDP. It works fine if you don't expect too much from it.

IPv4

IPv4 connectivity is provided over VLAN 10. A DHCP client is mandatory. Moreover, the DHCP vendor class identifier option (option 60) needs to be advertised. This can be done by adding the following line to /etc/dhcp/dhclient.conf when using the ISC DHCP client:
send vendor-class-identifier "100008,0001,,Debian";
The first two numbers are here to identify the service you are requesting. I suppose this can be read as requesting the Swisscom residential access service. You can put whatever you want after that. Once you get a lease, you need to use a browser to identify yourself to Swisscom on the first use.
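The creation of the VLAN 10 interface itself is not shown here; a minimal sketch with iproute2 follows, assuming the physical port facing the SFP switch is called eth0 (a placeholder) and matching the internet interface name used in the QoS section below. The real setup lives in the GitHub repository mentioned earlier:

    # Create the VLAN 10 interface on top of the physical port and bring both up
    ip link add link eth0 name internet type vlan id 10
    ip link set eth0 up
    ip link set internet up
    # Request a lease; dhclient picks up the vendor-class-identifier from dhclient.conf
    dhclient -v internet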

IPv6

Swisscom provides IPv6 access through the 6rd protocol. This is a tunneling mechanism to facilitate IPv6 deployment across an IPv4 infrastructure. This kind of tunnel is natively supported by Linux since kernel version 2.6.33. To set up IPv6, you need the base IPv6 prefix and the 6rd gateway. Some ISPs provide those values through DHCP (option 212) but this is not the case for Swisscom. The gateway is 6rd.swisscom.com and the prefix is 2a02:1200::/28. After appending the 32 bits of the IPv4 address to the prefix, you still get 4 bits for internal subnets (for example, the public address 192.0.2.1 would map to the delegated prefix 2a02:120c:0:2010::/60). Swisscom doesn't provide a fixed IPv4 address, therefore it is not possible to precompute the IPv6 prefix. When installed as a DHCP hook (in /etc/dhcp/dhclient-exit-hooks.d/6rd), the following script configures the tunnel:
sixrd_iface=internet6
sixrd_mtu=1472                  # This is 1500 - 20 - 8 (PPPoE header)
sixrd_ttl=64
sixrd_prefix=2a02:1200::/28     # No way to guess, just have to know it.
sixrd_br=193.5.29.1             # That's "6rd.swisscom.com"
sixrd_down() {
    ip tunnel del ${sixrd_iface} || true
}
sixrd_up() {
    ipv4=${new_ip_address:-$old_ip_address}
    sixrd_subnet=$(ruby <<EOF
require 'ipaddr'
prefix = IPAddr.new "${sixrd_prefix}", Socket::AF_INET6
prefixlen = ${sixrd_prefix#*/}
ipv4 = IPAddr.new "${ipv4}", Socket::AF_INET
ipv6 = IPAddr.new (prefix.to_i + (ipv4.to_i << (64 + 32 - prefixlen))), Socket::AF_INET6
puts ipv6
EOF
)
    # Let's configure the tunnel
    ip tunnel add ${sixrd_iface} mode sit local $ipv4 ttl $sixrd_ttl
    ip tunnel 6rd dev ${sixrd_iface} 6rd-prefix ${sixrd_prefix}
    ip addr add ${sixrd_subnet}1/64 dev ${sixrd_iface}
    ip link set mtu ${sixrd_mtu} dev ${sixrd_iface}
    ip link set ${sixrd_iface} up
    ip route add default via ::${sixrd_br} dev ${sixrd_iface}
}
case $reason in
    BOUND|REBOOT)
        sixrd_down
        sixrd_up
        ;;
    RENEW|REBIND)
        if [ "$new_ip_address" != "$old_ip_address" ]; then
            sixrd_down
            sixrd_up
        fi
        ;;
    STOP|EXPIRE|FAIL|RELEASE)
        sixrd_down
        ;;
esac
The computation of the IPv6 prefix is offloaded to Ruby instead of trying to use the shell for that. Even if the ipaddr module is pretty basic, it suits the job. Swisscom is using the same MTU for all clients. Because some of them are using PPPoE, the MTU is 1472 instead of 1480. You can easily check your MTU with this handy online MTU test tool. It is not uncommon for PMTUD to be broken on some parts of the Internet. While not ideal, clamping the TCP MSS will alleviate any problem you may run into with an MTU less than 1500:
ip6tables -t mangle -A POSTROUTING -o internet6 \
          -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu

QoS

UPDATED: Unfortunately, this section is incorrect, including its premise. Have a look at Dave Taht's comment for more details.

Once upon a time, QoS was a tacky subject. The Wonder Shaper was a common way to get a somewhat working setup. Nowadays, thanks to the work of the Bufferbloat project, there are two simple steps to get something quite good:
  1. Reduce the queue of your devices to something like 32 packets. This helps TCP to detect congestion and act accordingly while still being able to saturate a gigabit link.
    ip link set txqueuelen 32 dev lan
    ip link set txqueuelen 32 dev internet
    ip link set txqueuelen 32 dev wlan
    
  2. Change the root qdisc to fq_codel. A qdisc receives packets to be sent from the kernel and decides how they are handed to the network card. Packets can be dropped, reordered or rate-limited. fq_codel is a queuing discipline combining fair queuing and controlled delay. Fair queuing means that all flows get an equal chance to be served; put another way, a high-bandwidth flow won't starve the queue. Controlled delay means that the queue size will be limited to ensure the latency stays low. This is achieved by dropping packets more aggressively when the queue grows. (A quick way to verify the result is shown after this list.)
    tc qdisc replace dev lan root fq_codel
    tc qdisc replace dev internet root fq_codel
    tc qdisc replace dev wlan root fq_codel
    
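To check that the new queueing discipline is in place and actually seeing traffic, the per-qdisc statistics can be inspected, using the same interface names as above:

    # Shows the qdisc in use plus counters for sent packets, drops and overlimits
    tc -s qdisc show dev internet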

  1. Maximum download speed is 1 Gbps, while maximum upload speed is 200 Mbps.
  2. This is the standard Vivo XL package rated at CHF 169, plus the 1 Gbps option at CHF 80.
  3. There are two references on it: SGA 441SFP0-1Gb and OST-1000BX-S34-10DI. It transmits on the 1310 nm wavelength and receives on the 1490 nm one.

10 November 2014

Neil Williams: On getting NEW packages into stable

There's a lot of discussion / moaning / arguing at this time, so I thought I'd post something about how LAVA got into Debian Jessie, the work involved and the lessons I've learnt. Hopefully, it will help someone avoid the disappointment of having their package miss the migration into a future stable release. This was going to be a talk at the Minidebconf-uk in Cambridge but I decided to put this out as a permanent blog entry in the hope that it will be a useful reference for the future, not just Jessie.

Context

LAVA relies on a number of dependencies which were, at the time all this started, NEW to Debian, as well as many others already in Debian. I'd been running LAVA using packages on my own system for a few months before the packages were ready for use on the main servers (I never actually installed LAVA using the old virtualenv method on my own systems, except in a VM). I did do quite a lot of this on my own but I also had a team supporting the effort and valuing the benefits of moving to a packaged system. At the time, LAVA was based on Ubuntu (12.04 LTS Precise Pangolin) and a new Ubuntu LTS was close (Trusty Tahr 14.04) but I started work on this in 2013. By the time my packages were ready for general usage, it was winter 2013 and much too close to get anything into Ubuntu in time for Trusty. So I started a local repo using space provided by Linaro. At the same time, I started uploading the dependencies to Debian. json-schema-validator, django-testscenarios and others arrived in April and May 2014. (Trusty was released in April.) LAVA arrived in NEW in May, being accepted into unstable at the end of June. LAVA arrived in testing for the first time in July 2014. Upstream development continued apace and a regular monthly upload, with some hotfixes in between, continued until close to the freeze. At this point, note that although upstream is a medium-sized team, the Debian packaging also has a team, but all the uploads were made by me.

I planned ahead. I knew that I would be going to Macau for Linaro Connect in February, a critical stage in the finalisation of the packages and the migration of existing instances from the old methods. I knew that I would be on vacation from August through to the end of September 2014, including at least two weeks with absolutely no connectivity of any kind. Right at this time, Django 1.7 arrived in experimental with the intent to go into unstable and hence into Jessie. This was a headache for me; I initially sought to delay the migration until after Jessie. However, we discussed it upstream, allocated time within the busy schedule and also sought help from within Debian with the RFH tag. Raphaël Hertzog contributed patches for Django 1.7 support and we worked on those patches upstream, once I was back from vacation. (The final week of my vacation was a work conference, so we had everyone together at one hacking table.) Still there was more to do: the Django 1.7 patches allowed the unit tests to complete but broke other parts of the lava-server package and needed subsequent tweaks and fixes. Even with all this, the auto-removal from testing for packages affected by RC bugs in their dependencies became very important to monitor (it still is). It would be useful if some packages had less complex dependency chains (I'm looking at you, uwsgi) as the auto-removal also covers build-depends. This led to some more headaches with libmatheval.
I'm not good with functional programming languages; I did have some exposure to Scheme when working on Gnucash upstream but it wasn't pleasant. The thought of fixing a Scheme problem in the test suite of libmatheval was daunting. Again though, asking for help, I found people in the upstream team who wanted to refresh their use of Scheme and were able to help out. The fix migrated into testing in October. Just for added complications, lava-server gained a few RC bugs of its own during that time too, fixed upstream but awkward nonetheless.

Achievement unlocked

So that's how a complex package like lava-server gets into stable. With a lot of help. The main problem with top-level packages like this is the sheer weight of the dependency chain. Something seemingly unrelated (like libmatheval) can seriously derail the migrations. The package doesn't use the matheval support provided by uwsgi. The bug in matheval wasn't in the parts of matheval used by uwsgi. It wasn't in a language I am at all comfortable fixing, but it's my name on the changelog of the NMU. That happened because I asked for help. OK, when Django 1.7 was scheduled to arrive in Debian unstable and I knew that lava was not ready, I reacted out of fear and anxiety. However, I sought help, help was provided and that help was enough to get upstream to a point where the side-effects of the required changes could be fixed. Maintaining a top-level package in Debian is becoming more like maintaining a core package in Debian, and that is a good thing. When your package has a lot of dependencies, those dependencies become part of the maintenance workload of your package. It doesn't matter if those are install-time dependencies, build dependencies or reverse dependencies. It doesn't actually matter if the issues in those packages are in languages you would personally wish to be expunged from the archive. It becomes your problem, but not yours alone. Debian has a lot of flames right now and Enrico encouraged us to look at what else is actually happening in Debian besides those arguments. Well, on top of all this with lava, I also did what I could to help the arm64 port along and I'm very happy that this has been accepted into Jessie as an official release architecture. That's a much bigger story than LAVA, yet LAVA was and remains instrumental in how arm64 gained the support in the kernel and various upstreams which allowed patches to be accepted and fixes to be incorporated into Debian packages. So, a roll call of helpers who may otherwise not have been recognised via changelogs, in no particular order. Also, general thanks to the Debian FTP and Release teams.

Lessons learnt
  1. Allow time! None of the deadlines or timings involved in this entire process were hidden or unexpected. NEW always takes a finite but fairly lengthy amount of time, but that was the only timeframe with any amount of uncertainty. That is actually a benefit: it reminds you that this entire process is going to take a significant amount of time and the only loser if you try to rush it is going to be you and your package. Plan for the time and be sceptical about how much time is actually required.
  2. Ask for help! Everyone in Debian is a volunteer. Yes, the upstream for this project is a team of developers paid to work on this code (and largely only this code) but the upstream also has priorities, requirements, objectives and deadlines. It's no good expecting upstream to do everything. It's no good leaving upstream insufficient time to fit the required work into the existing upstream schedules. So ask for help within upstream and within Debian; ask for help wherever you can. You don't know who may be able to help you until you ask. Be clear when asking for help: how would someone test their proposed fix? Exactly what are you asking for help doing? (Hint: everything is not a good answer.)
  3. Keep on top of announcements and changes. The release team in Debian have made the timetable strict and have published regular updates, guidelines and status notes. As maintainer, it is your responsibility to keep up with those changes and make others in the upstream team aware of the changes and the implications. Upstream will rely on you to provide accurate information about these requirements. This is almost more important than actually providing the uploads or fixes. Without keeping people informed, even asking for help can turn out to be counter-productive. Communicate within Debian too: talk to the teams, send status updates to bugs (even if the status is tag 123456 + help).
  4. Be realistic! Life happens around us, things change, personal timetables get torn up. Time for voluntary activity can appear and disappear (it tends to disappear far more often than it extends, so take that into account too).
  5. Do not expect others to do the work for you: asking for help is one thing, leaving the work to others is quite another. No complaining to the release team that they are blocking your work, and avoid pleading or arguing when a decision is made. The policies and procedures within Debian are generally clear and there are quite enough arguments without adding more. Read the policies, read the guidelines, watch how other packages and other maintainers are handled and avoid those mistakes. Make it easy for others to help deliver what you want.
  6. Get to know your dependency chain: follow the links on the packages.debian.org pages and get a handle on which packages are relevant to your package. Subscribe to the bug pages for some of the more high-risk packages. There are tools to help (see the sketch after this list). rc-alert can help you spot problems with runtime dependencies (you do have your own package installed on a system running unstable; if not, get that running NOW). Watching build-dependencies is more difficult, especially build-dependencies of a runtime dependency, so watch the RC bug lists for packages in your dependency chain.
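As a concrete starting point for the tools mentioned in the previous item, both rc-alert and build-rdeps ship in the devscripts package; the package name queried below is only an example:

    # Report installed packages that are currently affected by release-critical bugs
    apt-get install devscripts
    rc-alert
    # List the source packages that build-depend on one of your dependencies
    # (requires deb-src entries in your apt sources)
    build-rdeps libmatheval-dev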
Above all else, remember why you and upstream want the packages in Debian in the first place. Debian is a respected distribution and has an acknowledged reputation for stability and portability. The very qualities that you and your upstream desire from having your package in Debian have direct implications for the amount of work and the amount of time that will be required to get your packages into Debian and keep them there. Having your package in Debian will bring considerable benefits, but you will be required to invest a considerable amount of time. It is this contribution which is valuable to Debian and it is this work which will deliver the benefits you seek. Being an expert in the one package is wildly inadequate. Debian is about the system, the whole distribution, and sooner or later, you as the maintainer will be absolutely required to handle something which is so far out of your comfort zone it's untrue. The reality is that you are not expected to fix that problem; you are expected to handle that problem, and that includes seeking and acknowledging the help of others.

The story isn't over until release day. Having your package in testing the day before the freeze is one step. It may be a large step, but it is only one. The status of that package still needs monitoring. That long dependency chain can still come back and bite. Don't wait for problems to surprise you.

Finally

One thing I do ask is that other upstream teams and maintainers think about the dependency chain they are creating. It may sound nice to have bindings for every interpreted language possible when releasing your compiled library, but it does not help people using that library. Yes, it is more work releasing the bindings separately, because a stable API is going to be needed to allow version 1.2.3 to work alongside 1.2.2 and 1.3.0, or the entire effort is pointless. Consider how your upstream package migrates. Consider how adding yet another build-dependency for an optional component makes things exponentially harder for those who need to rely on that upstream. If it is truly optional, release it separately and keep backwards compatibility on each side. It is more work, but in reality all that is happening is that the work is being transferred from the distribution (where it helps only that one distribution and causes duplication into other distributions) into the upstream (where it helps all distributions). Think carefully about what constitutes core functionality and release the rest separately. Combining bindings for php, ruby, python, java, lua and xslt into a single upstream release tarball is complete nonsense. It simply means that the package gets blocked from new uploads by the constant churn of being involved in every transition that occurs in the distribution. There is a very real risk that the package will miss a stable release simply by having fingers in too many pies. That hurts not only this upstream but every upstream trying to use any part of your code. Every developer likes to think that people are using and benefiting from their effort. It's not nice to directly harm the interests of other developers trying to use your code. It is not enough for the binary packages to be discrete: migrations happen by source package, and the released tarball needs to not include the optional bindings. It must be this way because it is the source package which determines whether version 1.2.3 of plugin foo can work with version 1.2.0 of the library as well as with version 1.3.0.
Maintainers regularly deal with these issues so talk to your upstream teams and explain why this is important to that particular team. Help other maintainers use your code and help make it easier to make a stable release of Debian. The quicker the freeze & release process becomes, the quicker new upstream versions can be uploaded and backported.

31 October 2014

Russell Coker: Links October 2014

The Verge has an interesting article about Tim Cook (Apple CEO) coming out [1]. Tim says "if hearing that the CEO of Apple is gay can help someone struggling to come to terms with who he or she is, or bring comfort to anyone who feels alone, or inspire people to insist on their equality, then it's worth the trade-off with my own privacy". Graydon2 wrote an insightful article about the right-wing libertarian sock-puppets of silicon valley [2]. George Monbiot wrote an insightful article for The Guardian about the way that double-speak facilitates killing people [3]. He is correct that the media should hold government accountable for such use of language instead of perpetuating it. Anne Thériault wrote an insightful article for Vice about the presumption of innocence and sex crimes [4]. Dr Nerdlove wrote an interesting article about Gamergate as the extinction burst of gamer culture [5], we can only hope. Shweta Narayan wrote an insightful article about Category Structure and Oppression [6]. I can't summarise it because it's a complex concept, read the article. Some Debian users who don't like Systemd have started a Debian Fork project [7], which so far just has a web site and nothing else. I expect that they will never write any code. But it would be good if they did, they would learn about how an OS works and maybe they wouldn't disagree so much with the people who have experience in developing system software. A GamerGate terrorist in Utah forces Anita Sarkeesian to cancel a lecture [8]. I expect that the reaction will be different when (not if) an Islamic group tries to get a lecture cancelled in a similar manner. Model View Culture has an insightful article by Erika Lynn Abigail about Autistics in Silicon Valley [9]. Katie McDonough wrote an interesting article for Salon about Ed Champion and what to do about men who abuse women [10]. It's worth reading that while thinking about the FOSS community.

30 September 2014

Russell Coker: Links September 2014

Matt Palmer wrote a short but informative post about enabling DNSSEC in a zone [1]. I really should set up DNSSEC on my own zones. Paul Wayper has some insightful comments about the Liberal party's nasty policies towards the unemployed [2]. We really need a Basic Income in Australia. Joseph Heath wrote an interesting and insightful article about the decline of the democratic process [3]. While most of his points are really good, I'm dubious of his claims about Twitter. When used skillfully, Twitter can provide short insights into topics and teasers for linked articles. Sarah O wrote an insightful article about NotAllMen/YesAllWomen [4]. I can't summarise it well in a paragraph, I recommend reading it all. Betsy Haibel wrote an informative article about harassment by proxy on the Internet [5]. Everyone should learn about this before getting involved in discussions about controversial issues. George Monbiot wrote an insightful and interesting article about the referendum for Scottish independence and the failures of the media [6]. Mychal Denzel Smith wrote an insightful article "How to know that you hate women" [7]. Sam Byford wrote an informative article about Google's plans to develop and promote cheap Android phones for developing countries [8]. That's a good investment in future market share by Google and good for the spread of knowledge among people all around the world. I hope that this research also leads to cheap and reliable Android devices for poor people in first-world countries. Deb Chachra wrote an insightful and disturbing article about the culture of non-consent in the IT industry [9]. This is something we need to fix. David Hill wrote an interesting and informative article about the way that computer game journalism works and how it relates to GamerGate [10]. Anita Sarkeesian shares the most radical thing that you can do to support women online [11]. Wow, the world sucks more badly than I realised. Michael Daly wrote an article about the latest evil from the NRA [12]. The NRA continues to demonstrate that claims about good people with guns are lies; the NRA are evil people with guns.

31 August 2014

Russell Coker: Links August 2014

Matt Palmer wrote a good overview of DNSSEC [1]. Sociological Images has an interesting article making the case for phasing out the US $0.01 coin [2]. The Australian $0.01 and $0.02 coins were worth much more when they were phased out. Multiplicity is a board game that's designed to address some of the failings of SimCity type games [3]. I haven't played it yet but the page describing it is interesting. Carlos Bueno's article about the Mirrortocracy has some interesting insights into the flawed hiring culture of Silicon Valley [4]. Adam Bryant wrote an interesting article for the NY Times about Google's experiments with big data and hiring [5]. Among other things it seems that grades and test results have no correlation with job performance. Jennifer Chesters from the University of Canberra wrote an insightful article about the results of Australian private schools [6]. Her research indicates that kids who go to private schools are more likely to complete year 12 and university but they don't end up earning more. Kiwix is an offline Wikipedia reader for Android; it needs 9.5G of storage space for the database [7]. Melanie Poole wrote an informative article for Mamamia about the evil World Congress of Families and their connections to the Australian government [8]. The BBC has a great interactive web site about how big space is [9]. The Raspberry Pi Spy has an interesting article about automating Minecraft with Python [10]. Wired has an interesting article about the BitTorrent Sync platform for distributing encrypted data [11]. It's apparently like Dropbox but encrypted and decentralised. Also it supports applications on top of it which can offer social networking functions among other things. ABC news has an interesting article about the failure to diagnose girls with Autism [12]. The AbbottsLies.com.au site catalogs the lies of Tony Abbott [13]. There's a lot of work in keeping up with that. Racialicious.com has an interesting article about Moff's Law, about discussion of media in which someone says "why do you have to analyze it" [14]. Paul Rosenberg wrote an insightful article about conservative racism in the US; it's a must-read [15]. Salon has an interesting and amusing article about a photography project where 100 people were tased by their loved ones [16]. Watch the videos.

31 July 2014

Russell Coker: Links July 2014

Dave Johnson wrote an interesting article for Salon about companies ripping off the tax system by claiming that all their income is produced in low-tax countries [1]. Seb Lee-Delisle wrote an insightful article about how to ask to get paid to speak [2]. I should do that. Daniel Pocock wrote an informative article about the reConServer simple SIP conferencing server [3]. I should try it out; currently most people I want to conference with are using Google Hangouts, but getting away from Google is a good thing. François Marier wrote an informative post about hardening ssh servers [4]. S. E. Smith wrote an interesting article "I Am Tired of Hearing Programmers Defend Gender Essentialism" [5]. Bert Archer wrote an insightful article about lazy tourism [6]. His initial example of love locks breaking bridges was a bit silly (it's not difficult to cut locks off a bridge) but his general point about lazy/stupid tourism is good. Daniel Pocock wrote an insightful post about new developments in taxis, the London Taxi protest against Uber, and related changes [7]. His post convinced me that Uber is a good thing and should be supported. I checked the prices and unfortunately Uber is more expensive than normal taxis for my most common journey. Cory Doctorow wrote an insightful article for The Guardian about the moral issues related to government spying [8]. The Verge has an interesting review of the latest Lytro Lightbox camera [9]. Not nearly ready for me to use, but interesting technology. Prospect has an informative article by Kathryn Joyce about the Protestant child sex abuse scandal in the US [10]. Billy Graham's grandson is leading the work to reform churches so that they protect children instead of pedophiles. Prospect also has an article by Kathryn Joyce about Christians home-schooling kids to try and program them to be zealots and how that hurts kids [11]. The Daily Beast has an interesting article about the way that the extreme right wing in the US are trying to kill people; it's the right wing death panel [12]. Jay Michaelson wrote an informative article for The Daily Beast about right-wing hate groups in the US who promote the extreme homophobic legislation in Russia and other countries [13]. It also connects to the Koch brothers, who seem to be associated with most evil. Elias Isquith wrote an insightful article for Salon about how the current right-wing obsession with making homophobic discrimination an issue of religious liberty will hurt religious people [14]. He also describes how stupid the right-wing extremists are in relation to other issues too. EconomixComix.com has a really great comic explaining the economics of Social Security in the US [15]. They also have a comic explaining the TPP which is really good [16]. They sell a comic book about economics which I'm sure is worth buying. We need to have comics explaining all technical topics; it's a good way of conveying concepts. When I was in primary school my parents gave me comic books covering nuclear physics and other science topics which were really good. Mia McKenzie wrote an insightful article for BlackGirlDangerous.com about dealing with racist white teachers [17]. I think that it would be ideal to have a school dedicated to each minority group with teachers from that group.

30 June 2014

Russell Coker: Links June 2014

Russ Allbery wrote an insightful blog post about trust, computer security, and training programmers [1]. He makes a good case that social problems in our community decrease the availability of skilled people to write and audit security code. The Lawfare blog has an insightful article by Dan Geer about Heartbleed as a Metaphor [2]. He makes some good points about security and design, ways of potentially solving some flaws and problems with the various solutions. Eben Moglen wrote an insightful article for The Guardian about the way that the NSA spying is a direct threat to democracy [3]. The TED blog has an interesting interview with Kitra Cahana about her work living with and photographing nomads in the US [4]. I was surprised to learn that there's an active nomad community in the US based on the culture that started in the Great Depression. Apparently people are using Youtube to learn about nomad culture before joining. Dave Johnson wrote an interesting Salon article about why CEOs make 300 times as much money as workers [5]. Note that actually contributing to the financial success of the company is not one of the reasons. Maia Szalavitz wrote an interesting Slate article about Autism and Anorexia [6]. Apparently some people on the Autism Spectrum are mis-diagnosed with Anorexia due to food intolerance. Groups of four professors have applied for the job of president and vice-chancellor of the University of Alberta [7]. While it was a joke to apply in that way, 1/4 of the university president's salary is greater than the salary of a professor and the university would get a team of 4 people to do the job, so it would really make sense to hire them. Of course the university could just pay a more reasonable salary for the president and hire an extra 3 professors. But the same argument applies for lots of highly paid jobs. Is a CEO who gets paid $10M per annum really going to do a better job than a team of 100 people who are paid $100K? Joel on Software wrote an insightful article explaining why hiring 1/200 applicants doesn't mean you hire the top 0.5% of workers [8]. He suggests that the best employees almost never apply through regular channels, so an intern program is the only way to get a chance of hiring the best people. Chaotic Idealism has an interesting article on some of the bogus claims about autism and violence [9]. Salon has an interesting article by Lindsay Abrams about the way the food industry in the US lobbies for laws to prevent employees from reporting animal cruelty or contamination of the food supply, and how drones will now be used for investigative journalism [10]. Jacobin Mag has an interesting article by Geoff Shullenberger about the "Voluntariat", the people who volunteer their time to help commercial organisations [11]. I don't object to people voluntarily helping companies, but when they are exploited or when the company also requires voluntary help from the government it becomes a problem. We need some legislation about this. Laura Hudson wrote an insightful article about how Riot Games solved their online abuse problem [12]. There are ideas in this that can apply to all online communities. Matt LeMay wrote an interesting article for Medium titled "What (Else) Can Men Do? Grow The Fuck Up" [13]. It's a general commentary on the treatment of women in geek communities (and most other places). Foz Meadows wrote an insightful analysis of the attempts of bigots to influence science-fiction [14].
If I had more spare time I d read some of the books by bigoted authors on the Sad Puppy Slate (from a library of course) and see if they lack talent in the same way that Orson Scott Card does. Racialicious has an interesting article by Phenderson Djeli Clark about the horrible racism and bigotry of H.P. Lovecraft [15]. I have only read two HP Lovecraft stories, one was mediocre and the other (The Horroe at Red Hook) was quite poor largely due to his inability to write about non-white people. Grace Wyler wrote an insightful article for Vice magazine about the right-wing terrorists in the US killing cops [16]. Paul Rosenberg wrote an interesting and amusing (for people outside the US) article about the gun crazies in the US [17]. Maybe the gun crazies should have a loaded assault rifles and tequila party to follow up on their gun appreciation day . A US TV show made a 4 minute clip of some of the stupid things that Tony Abbott has done [18]. Tony is almost as stupid as Dubya.

12 April 2014

Russ Allbery: Accumulated haul

Wow, it's been a while since I've done this. In part that's because I've not had much time for reading books (which doesn't prevent me from buying them).
Jared Bernstein & Dean Baker – Getting Back to Full Employment (non-fiction)
James Coughtrey – Six Seconds of Moonlight (sff)
Philip J. Davis & Reuben Hersh – The Mathematical Experience (non-fiction)
Debra Dunbar – A Demon Bound (sff)
Andy Duncan & Ellen Klages – Wakulla Springs (sff)
Dave Eggers & Jordan Bass – The Best of McSweeney's (mainstream)
Siri Hustvedt – The Blazing World (mainstream)
Jacqueline Koyanagi – Ascension (sff)
Ann Leckie – Ancillary Justice (sff)
Adam Lee – Dark Heart (sff)
Seanan McGuire – One Salt Sea (sff)
Seanan McGuire – Ashes of Honor (sff)
Seanan McGuire – Chimes at Midnight (sff)
Seanan McGuire – Midnight Blue-Light Special (sff)
Seanan McGuire – Indexing (sff)
Naomi Mitchison – Travel Light (sff)
Helaine Olen – Pound Foolish (non-fiction)
Richard Powers – Orfeo (mainstream)
Veronica Schanoes – Burning Girls (sff)
Karl Schroeder – Lockstep (sff)
Charles Stross – The Bloodline Feud (sff)
Charles Stross – The Traders' War (sff)
Charles Stross – The Revolution Trade (sff)
Matthew Thomas – We Are Not Ourselves (mainstream)
Kevin Underhill – The Emergency Sasquatch Ordinance (non-fiction)
Jo Walton – What Makes This Book So Great? (non-fiction)
So, yeah. A lot of stuff. I went ahead and bought nearly all of the novels Seanan McGuire had out that I'd not read yet after realizing that I'm going to eventually read all of them and there's no reason not to just own them. I also bought all of the Stross reissues of the Merchant Princes series, even though I had some of the books individually, since I think it will make it more likely I'll read the whole series this way. I have so much stuff that I want to read, but I've not really been in the mood for fiction. I'm trying to destress enough to get back in the mood, but in the meantime have mostly been reading non-fiction or really light fluff (as you'll see from my upcoming reviews). Of that long list, Ancillary Justice is getting a lot of press and looks interesting, and Lockstep is a new Schroeder novel. 'Nuff said. Kevin Underhill is the author of Lowering the Bar, which you should read if you haven't since it's hilarious. I'm obviously looking forward to that. The relatively obscure mainstream novels here are more Powell's Indiespensible books. I will probably cancel that subscription soon, at least for a while, since I'm just building up a backlog, but that's part of my general effort to read more mainstream fiction. (I was a bit disappointed since there were several months with only one book, but the current month finally came with two books again.) Now I just need to buckle down and read. And play video games. And do other things that are fun rather than spending all my time trying to destress from work and zoning in front of the TV.

27 March 2014

Aigars Mahinovs: Photo migration from Flickr to Google Plus

I've been with Flickr since 2005 now, posting a lot of my photos there, so that other people from the events that I usually take photos of could enjoy them. But lately I've become annoyed with it. It is very slow to upload to and even worse to get photos out of - there is no large shiny button to "Download a set of photos", like I noticed in G+. So I decided to try and copy my photos over. I am not abandoning or deleting my Flickr account yet, but we'll see. The process was not as simple as I hoped. There is this FlickrToGpluss website tool. It would have been perfect... if it worked. In that tool you simply log in to both services, check which albums you want to migrate over and at what photo size, and that's it - the service will do the migration directly on their servers. It actually feeds Google the URLs of the Flickr photos, so the photos don't even go through the service itself, only metadata does. Unfortunately I hit a couple of snags - first of all the migration stopped progressing a few days and ~20 GB into the process (out of ~40 GB). And for the photos that were migrated, their titles were empty and their file names were set to Flickr descriptions. Among other things that meant that when you downloaded the album as a zip file with all the photos (which was the feature that I was doing this whole thing for) you got photos in almost random order - namely in the order of their sorted titles. Ugh. So I canceled that migration (by revoking privileges to that app on G+, there is no other way to see or modify progress there) and sat down to make a manual-ish solution. First, I had to get my photos out of Flickr. For that I took Offlickr and ran it in set mode:
./Offlickr.py -i 98848866@N00 -p -s -c 32
The "98848866@N00" is my Flickr ID which I got from this nice service, then -p to download photos (and not just metadata), -s to download all sets and -c 32 to do the download in 32 parallel threads. An important thing to do is to take all you photos that are not in a set in Flickr and add them to a new 'nonset" so that those photos are also picked up here, there is an option under Organize to select all non-set photos. It worked great, but there were a couple tiny issues:
  1. There is a bug in Offlickr where it does not honor paging in Flickr sets, so it only downloads the first 500 images in each set; a fix for that is in the bug report;
  2. It also wanted Python 2.6 for some reason, but worked fine with Python 2.7;
  3. With that number of threads, sometimes Flickr actually failed to respond with the photo, serving a 500 error page instead. Offlickr does not check the return code and happily saves that HTML page as the photo. To work around that I simply deleted the HTML errors and then ran the same Offlickr command again so that it re-downloads the missing files. I had to repeat that a few times to get all of them:
ack-grep -l -R "504 Gateway Time-out" dst/ | xargs rm
After all that I had my photos, all 40 GB of them, on my computer. Should I upload them to G+ now? Not yet! See, the photos had all lost their original file names. It turns out Flickr simply throws that little nugget of information away. It is nowhere to be found - neither in the metadata, nor the UI, nor the EXIF of the photos. Also some of my photos had clever descriptions that I did not want to lose or re-enter in G+, and also geolocation information. Flickr does not embed that info into the EXIF of the image; instead it is provided separately - Offlickr saves that as an XML file next to each image. So I wrote a simple and hacky script to re-embed that info (a rough sketch of the idea follows the list below). It did 3 things:
  1. Embed title of the photo into the Description EXIF tag, so that G+ automatically picks it up as title of the photo;
  2. Embed the GEO location information into the proper EXIF tags, so that G+ picks that up automatically;
  3. Create a new file name based on original picture taken datetime and EXIF Canon FileNumber field (if such exists), so that all photos in an album are sequential.
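As a rough illustration (this is not the actual script from the post), here is a hedged sketch of what those three steps might look like with exiftool. The file name, title and coordinates are made-up examples, and the XML parsing that would feed these variables from Offlickr's per-photo metadata file is left out.
#!/bin/sh
# Hypothetical example values; in practice they would be extracted from
# the XML file that Offlickr writes next to each image.
img="dst/12345678.jpg"
title="Dinner after the conference"
lat="48.8584"; lon="2.2945"

# 1. Put the title into the EXIF description so G+ shows it as the caption.
# 2. Put the coordinates into the standard GPS tags.
exiftool -overwrite_original \
    -ImageDescription="$title" \
    -GPSLatitude="$lat" -GPSLatitudeRef=N \
    -GPSLongitude="$lon" -GPSLongitudeRef=E \
    "$img"

# 3. Rename the file from the date taken plus the Canon FileNumber field,
#    so that photos in an album sort sequentially.
exiftool -d %Y%m%d-%H%M%S \
    '-FileName<${DateTimeOriginal}_${FileNumber}.jpg' "$img"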
It uses exiftool for the actual heavy lifting. After all that was finished I tested the result by uploading a few images to G+ and checking that their title is set correctly, that they have a sane file name and that geo information works. After that I just uploaded them all. I tried figuring out the G+ API (they actually have it) but I was unable to get past the tutorial, so I abandoned it and simply uploaded the photos of each set into their own tab via a browser. That took a few hours. But that is much faster than with Flickr. Like 4 MB/s versus 0.5 MB/s faster. And here is the result. So far I kind of like it. We'll see how it goes after a year or so. Now on to an even more fun problem - I now have ~40 GB of photos from Flickr/G+ and ~100 GB of photos locally. Those sets partially intersect. I know for a fact that there are photos in the Flickr set that are not in my local set, and it is pretty obvious that there are some the other way round. Now I need to find them. Oh, and I can't use simple hashes, because the EXIF has changed and so have the file names for most of them. And not to forget that I often take a burst of 3-4 pictures, so there are bound to be some near-duplicate photos in each set too. This shall be fun :)
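For the exact-duplicate part of that problem, one possible approach is sketched below, assuming ImageMagick is installed and that the two collections are JPEGs under hypothetical flickr/ and local/ directories: index each side by ImageMagick's image signature, which is computed from the decoded pixel data and therefore ignores EXIF and file-name differences. Burst near-duplicates would still need separate handling.
# Index each collection by pixel signature (ImageMagick's %# format escape),
# then list the photos that exist on one side only.
for dir in flickr local; do
    find "$dir" -iname '*.jpg' -print0 | \
        xargs -0 identify -format '%# %i\n' | sort > "$dir.sigs"
done
join -v1 flickr.sigs local.sigs   # only in the Flickr export
join -v2 flickr.sigs local.sigs   # only in the local archive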

12 March 2014

John Goerzen: Agile Is Dead (Long Live Agility)

In an intriguing post, PragDave laments how empty the word "agile" has become. To paraphrase, I might say he's put words to a nagging feeling I've had: that there are entire books about agile, conferences about agile, hallway conversations I've heard about whether somebody is doing this-or-that agile practice correctly. Which, when it comes down to it, means that they're not being agile. If process and tools, even if they're labeled as agile processes and tools, are king, then we've simply replaced one productivity-impairing dictator with another. And he makes this bold statement:
Here is how to do something in an agile fashion: What to do:
  • Find out where you are
  • Take a small step towards your goal
  • Adjust your understanding based on what you learned
  • Repeat
How to do it: When faced with two or more alternatives that deliver roughly the same value, take the path that makes future change easier. Those four lines and one practice encompass everything there is to know about effective software development.
He goes on to dive into that a bit, of course, but I think this man has a rare gift of expressing something complicated so succinctly. I am inclined to believe he is right.

25 February 2014

Lucas Nussbaum: self-hosting my calendar

I'm trying to self-host my calendar setup, and I must admit that I'm lost between all the different solutions. My requirements are: It does not seem to be possible to find a single framework doing all of the above. AFAIK: What did I miss?

18 January 2014

James Bromberger: Linux.conf.au 2014: LCA TV

The radio silence here on my blog has been not from lack of activity, but the inverse. Linux.conf.au chewed up the few remaining spare cycles I have had recently (after family and work), but not from organising the conference (been there, got the T-Shirt and the bag). So, let's do a run-through of what has happened. LCA2014 Perth has come and gone in pretty smooth fashion. A remarkable effort from the likes of the Perth crew of Luke, Paul, Euan, Leon, Jason, Michael, and a slew of volunteers who stepped up, not to mention our interstate friends of Steve and Erin, Matthew, James I, Tim the Streaming guy and others, and our pro organisers at Manhattan Events. It was a reasonably smooth ride: the UWA campus was beautiful, the lecture theatres were workable, and the Octagon Theatre was at its best when filled with just shy of 500 like-minded people and an accomplished person gracing the stage. What was impressive (to me, at least) was the effort of the AV team (which I was on the extreme edges of); videos of keynotes hit the Linux Australia mirror within hours of the event. Recording and live streaming of all keynotes and sessions happened almost flawlessly. Leon had built a reasonably robust video capture management system (eventstreamer on github) to ensure that people fresh to DVswitch had nothing break so badly it didn't automatically fix itself, and all of this was monitored from the Operations Room (called the TAVNOC, which would have been the AV NOC, but somehow a loose reference to the UWA Tavern, the Tav, crept in there). Some 167 videos were made and uploaded; most of this was also mirrored on campus before the end of the conference so attendees could load up their laptops with plenty of content for the return trip home. Euan's quick Blender work meant there was a nice intro and outro graphic, and Leon's scripting ensured that Zookeepr, the LCA conference management software, was the source of truth in getting all videos processed and tagged correctly. I was scheduled (and did give) a presentation at LCA 2014 about Debian on Amazon Web Services (on Thursday), and attended as many of the sessions as possible, but my good friend Michael Davies (LCA 2004 chair, and chair of the LCA Papers Committee for a good many years) had another role for this year. We wanted to capture some of the hallway track of Linux.conf.au that is missed in all the videos of presentations. And thus was born LCA TV. LCA TV consisted of the video equipment for an additional stream - mixer host, cameras, cables and switches - hooking into the same streaming framework as the rest of the sessions. We took over a corner of the registration room (UWA Undercroft), brought in a few stage lights, a couch, coffee table, seat, some extra mics, and aimed to fill the session gaps with informal chats with some of the people at Linux.conf.au - speakers, attendees and volunteers alike. And come they did. One or two interviews didn't succeed (this was an experiment), but in the end, we've got over 20 interviews with some interesting people. These streamed out live to the people watching LCA from afar, those unable to make it to Perth in early January; but they were recorded too and we can start to watch them (see below). I was also lucky enough to mix the video for the three keynotes as well as the opening and closing, with a very capable crew around the Octagon Theatre.
As the curtain came down, and the 2014 crew took to the stage to be congratulated by the attendees, I couldn't help but feel a little bit proud and a touch nostalgic - memories from 11 years earlier when LCA 2003 came to a close in the very same venue. So, before we head into the viewing season for LCA TV, let me thank all the volunteers who organised, the AV volunteers, the Registration volunteers, the UWA team who helped with the Octagon, Networking, and the awesome CB Radios hooked up to the UWA repeater that worked all the way to the airport. Thanks to the Speakers who submitted proposals. The Speakers who were accepted, made the journey and took to the stage. The people who attended. The sponsors who help make this happen. All of the above helps share the knowledge, and ultimately, move the community forward. But my thanks to Luke and Paul for agreeing to stand there in the middle of all this madness and hive of semi-structured activity that just worked. Please remember this was experimental; the noise was the buzz of the conference going on around us. There was pretty much only one person on the AV kit - my thanks to Andrew Cooks, who I'll dub as our sound editor, vision director, floor manager, and anything else. So who did we interview? One or two talks did not work, so apologies to those that are missing. Here's the playlist to start you off! Enjoy.

16 June 2013

Daniel Pocock: Monitoring with Ganglia: an O'Reilly community book project

I recently had the opportunity to contribute to an O'Reilly community book project, developing the book Monitoring with Ganglia in collaboration with other members of the Ganglia team.

The project itself, as a community book, pays no royalties back to the contributors, as we have chosen to donate all proceeds to charity. People who contributed to the book include
Robert Alexander, Jeff Buchbinder, Frederiko Costa, Alex Dean, Dave Josephsen, Bernard Li, Matt Massie, Brad Nicholes, Peter Phaal and Vladimir Vuksan, and we also had generous assistance from various members of the open source community who assisted in the review process. Ganglia itself started at the University of California, Berkeley as an initiative of Matt Massie, for monitoring HPC cloud infrastructure. My own contact with Ganglia only began in 2008 when I was offered the opportunity to work full-time on the enterprise-wide monitoring systems for a large investment bank. Ganglia had been chosen for this huge project due to its small footprint, support for many platforms and its ability to work on a heterogeneous network, as well as providing dedicated features for the bank's HPC grid. This brings me to one important point about Ganglia: it's not just about HPC any more. While it is extremely useful for clusters, grids and clouds, it is also quite suitable for a mixed network of web servers, mail servers, databases and all the other applications you may find in a small business, education or ISP environment.
Instantly up and running with packages
One of the most compelling features, even for small sites with less than 10 nodes, is the ease of installation: install the packages on Debian, Ubuntu, Fedora, OpenCSW and some other platforms, and it just works. Ganglia nodes will find each other over multicast, instantly, no manual configuration changes necessary. On one of the nodes, the web interface must be installed for viewing the statistics. Dare I say it: it is so easy, you hardly even need the book for a small installation (a rough sketch of such a minimal setup is at the end of this post). Where the book is really compelling is if you have hundreds or thousands of nodes, if you want custom charts or custom metrics or anything else beyond just installing the package. If monitoring is more than 10% of your job, the book is probably a must-have.
Excellent open source architecture
Ganglia's simplicity is largely thanks to the way it leverages other open source projects such as Tobi Oetiker's RRDtool and PHP. Anybody familiar with these tools will find Ganglia is particularly easy to work with and customise.
Custom metrics: IO service times
One of my own contributions to the project has been the creation of ganglia-modules-linux, some plugins for Linux-specific metrics, and ganglia-modules-solaris, providing some similar metrics for Solaris. These projects on github provide an excellent base for people to fork and implement their own custom metrics in C or C++. The book provides a more detailed account of how to work with the various APIs for Python, C/C++, gmetric (command line/shell scripts) and Java.
The new web interface
For people who had tried earlier versions of Ganglia (and for those people who installed versions < 3.3.0 and still haven't updated), the new web interface is a major improvement and well worth the effort to install. It is available in the most recent packages (for example, it is in Debian 7 (wheezy) but not in Debian 6). It was originally promoted as a standalone project (code-named gweb2) but was adopted as the official Ganglia web interface around the release of Ganglia 3.3.0. This web page provides a useful overview of what has changed and here is the original release announcement.
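As a very rough illustration of the "it just works" point, here is a hedged sketch of a minimal Debian-style setup plus a one-off custom metric pushed from a shell script. The package split and gmetric flags are from memory, and the mail-queue metric is a made-up example, so treat this as illustrative rather than a verified recipe.
# On every node that should report metrics:
apt-get install ganglia-monitor
# On the node that aggregates data and serves the web interface:
apt-get install gmetad ganglia-webfrontend

# Push an ad-hoc custom metric from a script; the local gmond forwards it
# and it shows up in the web interface like any built-in metric.
gmetric --name mail_queue_length \
        --value "$(find /var/spool/postfix/deferred -type f | wc -l)" \
        --type uint32 --units messages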

23 May 2013

Clint Adams: Oh, free software

I want a CLI WebDAV client that's better than cadaver or hdav. I want a program that can sync an .ics file with a CalDAV server, by dividing it up into events and individually synchronizing each of those. I don't have a clue how deletions would be handled, but that would be nice too. Then I want a program that can synchronize the .ics file with org-mode files. I want a SIP client that works as well as Twinkle but has the architecture of SFLphone, or is a library upon which an arbitrary UI can be constructed. I want an HTML-rendering library that has callbacks or hooks for security- and privacy-relevant things like cookies and SSL certificates. I want at least one browser built on this library. I want it to support vi-like keybindings. I want an HTTP(S) proxy that can be dynamically configured per-client or per-connection through a standardized protocol that web browsers or their plugins can speak. I want it to be able to handle all the relevant things covered by AdBlock Plus, RequestPolicy, and NoScript. I want HTTPS authentication through Monkeysphere and mod_gnutls. I want a git-annex backend for Ogg Vorbis files that treats the audio streams independently of the metadata yet stores them together in the same file, so that everything behaves as usual but the annex doesn't bloat by 400Go after I run beets. I want a file transfer queuing system that can work over any sort of transport mechanism, direct or asynchronous, that handles partial transfers and throttling, and is generally magical. I want all kinds of accounting software improvements. I want sane PBX software. I want an OpenStack that doesn't use libvirt for KVM. I want backup software that behaves like some weird hybrid of BoxBackup and Dirvish. I want a peer-to-peer card- and board-game platform that uses cryptographic assurance. I want everyone to use YAML instead of XML. I want a phone that's not running a doomed operating system. I want lots of other stuff.

20 February 2013

Vincent Bernat: lldpd 0.7.1

A few weeks ago, a new version of lldpd, an 802.1AB (aka LLDP) implementation for various Unices, was released. LLDP is an industry standard protocol designed to supplant proprietary Link-Layer protocols such as EDP or CDP. The goal of LLDP is to provide an inter-vendor compatible mechanism to deliver Link-Layer notifications to adjacent network devices. In short, LLDP lets you know exactly which port a server is plugged into (and vice versa). To illustrate its use, I have made an xkcd-like strip (image: xkcd-like strip on the use of LLDP). If you would like more information about lldpd, please have a look at its new dedicated website. This blog post is an overview of the various technical changes that have affected lldpd since its latest major release one year ago. Lots of C stuff ahead!

Version & changelog
UPDATED: Guillem Jover told me how he met the same goals for libbsd:
  1. Save the version from git into .dist-version and use this file if it exists. This allows one to rebuild ./configure from the published tarball without losing the version. This also handles Thorsten Glaser's criticism.
  2. Include CHANGELOG in DISTCLEANFILES variable.
Since this is a better solution, I have adopted the appropriate lines of code from libbsd. The following two sections are therefore partly outdated, technically speaking.

Automated version
In configure.ac, I was previously using a static version number that I had to increase when releasing:
AC_INIT([lldpd], [0.5.7], [bernat@luffy.cx])
Since the information is present in the git tree, this seems a bit redundant (and easy to forget). Taking the version from the git tree is easy:
AC_INIT([lldpd],
        [m4_esyscmd_s([git describe --tags --always --match [0-9]* 2> /dev/null || date +%F])],
        [bernat@luffy.cx])
If the head of the git tree is tagged, you get the exact tag (0.7.1 for example). If it is not, you get the nearest one, the number of commits since it and part of the current hash (0.7.1-29-g2909519 for example). The drawback of this approach is that if you rebuild configure from the released tarball, you don't have the git tree and the version will be a date. Just don't do that.

Automated changelog
Generating the changelog from git is a common practice. I had some difficulties making it right. Here is my attempt (I am using automake):
dist_doc_DATA = README.md NEWS ChangeLog
.PHONY: $(distdir)/ChangeLog
dist-hook: $(distdir)/ChangeLog
$(distdir)/ChangeLog:
        $(AM_V_GEN)if test -d $(top_srcdir)/.git; then \
          prev=$$(git describe --tags --always --match [0-9]* 2> /dev/null) ; \
          for tag in $$(git tag | grep -E '^[0-9]+(\.[0-9]+){1,}$$' | sort -rn); do \
            if [ x"$$prev" = x ]; then prev=$$tag ; fi ; \
            if [ x"$$prev" = x"$$tag" ]; then continue; fi ; \
            echo "$$prev [$$(git log $$prev -1 --pretty=format:'%ai')]:" ; \
            echo "" ; \
            git log --pretty=' - [%h] %s (%an)' $$tag..$$prev ; \
            echo "" ; \
            prev=$$tag ; \
          done > $@ ; \
        else \
          touch $@ ; \
        fi
ChangeLog:
        touch $@
Changelog entries are grouped by version. Since it is a bit verbose, I still maintain a NEWS file with important changes.

Core

C99
I have recently read 21st Century C, which has some good bits and also covers the ecosystem around C. I have definitely adopted designated initializers in my coding style. Having been a GCC extension for a long time, this is not a major compatibility problem. Without designated initializers:
struct netlink_req req;
struct iovec iov;
struct sockaddr_nl peer;
struct msghdr rtnl_msg;
memset(&req, 0, sizeof(req));
memset(&iov, 0, sizeof(iov));
memset(&peer, 0, sizeof(peer));
memset(&rtnl_msg, 0, sizeof(rtnl_msg));
req.hdr.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtgenmsg));
req.hdr.nlmsg_type = RTM_GETLINK;
req.hdr.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
req.hdr.nlmsg_seq = 1;
req.hdr.nlmsg_pid = getpid();
req.gen.rtgen_family = AF_PACKET;
iov.iov_base = &req;
iov.iov_len = req.hdr.nlmsg_len;
peer.nl_family = AF_NETLINK;
rtnl_msg.msg_iov = &iov;
rtnl_msg.msg_iovlen = 1;
rtnl_msg.msg_name = &peer;
rtnl_msg.msg_namelen = sizeof(struct sockaddr_nl);
With designated initializers:
struct netlink_req req = {
    .hdr = {
        .nlmsg_len = NLMSG_LENGTH(sizeof(struct rtgenmsg)),
        .nlmsg_type = RTM_GETLINK,
        .nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
        .nlmsg_seq = 1,
        .nlmsg_pid = getpid() },
    .gen = { .rtgen_family = AF_PACKET }
};
struct iovec iov = {
    .iov_base = &req,
    .iov_len = req.hdr.nlmsg_len
};
struct sockaddr_nl peer = { .nl_family = AF_NETLINK };
struct msghdr rtnl_msg = {
    .msg_iov = &iov,
    .msg_iovlen = 1,
    .msg_name = &peer,
    .msg_namelen = sizeof(struct sockaddr_nl)
};

Logging
Logging in lldpd was not extensive. Usually, when receiving a bug report, I asked the reporter to add some additional printf() calls to determine where the problem was. This was clearly suboptimal. Therefore, I have added many log_debug() calls with the ability to filter out some of them. For example, to debug interface discovery, one can run lldpd with lldpd -ddd -D interface. Moreover, I have added colors when logging to a terminal. This may seem pointless but it is now far easier to spot warning messages from debug ones. (Screenshot: logging output of lldpd.)

libevent
In lldpd 0.5.7, I was using my own select()-based event loop. It worked but I didn't want to grow a full-featured event loop inside lldpd. Therefore, I switched to libevent. The minimal required version of libevent is 2.0.5. A convenient way to check the changes in API is to use Upstream Tracker, a website tracking API and ABI changes for various libraries. This version of libevent is not available in many stable distributions. For example, Debian Squeeze or Ubuntu Lucid only have 1.4.13. I am also trying to keep compatibility with very old distributions, like RHEL 2, which does not have a packaged libevent at all. For some users, it may be a burden to compile additional libraries. Therefore, I have included the libevent source code in the lldpd source tree (as a git submodule) and I am only using it if no suitable system libevent is available. Have a look at m4/libevent.m4 and src/daemon/Makefile.am to see how this is done.

Client

Serialization
lldpctl is a client querying lldpd to display discovered neighbors. The communication is done through a Unix socket. Each structure to be serialized over this socket should be described with a string. For example:
#define STRUCT_LLDPD_DOT3_MACPHY "(bbww)"
struct lldpd_dot3_macphy {
        u_int8_t                 autoneg_support;
        u_int8_t                 autoneg_enabled;
        u_int16_t                autoneg_advertised;
        u_int16_t                mau_type;
};
I did not want to use stuff like Protocol Buffers because I didn't want to copy the existing structures to other structures before serialization (and the other way after deserialization). However, the serializer in lldpd did not allow handling references to other structures, lists or circular references. I have written another one which works by annotating a structure with some macros:
struct lldpd_chassis {
    TAILQ_ENTRY(lldpd_chassis) c_entries;
    u_int16_t        c_index;
    u_int8_t         c_protocol;
    u_int8_t         c_id_subtype;
    char            *c_id;
    int              c_id_len;
    char            *c_name;
    char            *c_descr;
    u_int16_t        c_cap_available;
    u_int16_t        c_cap_enabled;
    u_int16_t        c_ttl;
    TAILQ_HEAD(, lldpd_mgmt) c_mgmt;
};
MARSHAL_BEGIN(lldpd_chassis)
MARSHAL_TQE  (lldpd_chassis, c_entries)
MARSHAL_FSTR (lldpd_chassis, c_id, c_id_len)
MARSHAL_STR  (lldpd_chassis, c_name)
MARSHAL_STR  (lldpd_chassis, c_descr)
MARSHAL_SUBTQ(lldpd_chassis, lldpd_mgmt, c_mgmt)
MARSHAL_END;
Only pointers need to be annotated. The remainder of the structure can be serialized with just memcpy() [1]. I think there is still room for improvement. It should be possible to add annotations inside the structure and avoid some duplication. Or maybe use a C parser? Or use the AST output from LLVM?

Library
In lldpd 0.5.7, there are two possible entry points to interact with the daemon:
  1. Through SNMP support. Only information available in LLDP-MIB is exported. Therefore, implementation-specific values are not available. Moreover, SNMP support is currently read-only.
  2. Through lldpctl. Thanks to a contribution from Andreas Hofmeister, the output can be requested to be formatted as an XML document.
Integration of lldpd into a network stack was therefore limited to one of those two channels. As an example, you can have a look at how Vyatta did the integration using the second solution. To provide a more robust solution, I have added a shared library, liblldpctl, with a stable and well-defined API. lldpctl is now using it. I have followed these directions [2]:
  • Consistent naming (all exported symbols are prefixed by lldpctl_). No pollution of the global namespace.
  • Consistent return codes (on errors, all functions returning pointers are returning NULL, all functions returning integers are returning -1).
  • Reentrant and thread-safe. No global variables.
  • One well-documented include file.
  • Reduce the use of boilerplate code. Don't segfault on NULL, accept integer input as strings, provide easy iterators, ...
  • Asynchronous API for input/output. The library delegates reading and writing by calling user-provided functions. Those functions can yield their effects. In this case, the user has to callback the library when data is available for reading or writing. It is therefore possible to integrate the library with any existing event-loop. A thin synchronous layer is provided on top of this API.
  • Opaque types with accessor functions.
Accessing bits of information is done through atoms, which are opaque containers of type lldpctl_atom_t. From an atom, you can extract some properties as integers, strings, buffers or other atoms. The list of ports is an atom. A port in this list is also an atom. The list of VLANs present on this port is an atom, as well as each VLAN in this list. The VLAN name is a NULL-terminated string living in the scope of an atom. Accessing a property is done by a handful of functions, like lldpctl_atom_get_str(), using a specific key. For example, here is how to display the list of VLANs, assuming you have one port as an atom:
vlans = lldpctl_atom_get(port, lldpctl_k_port_vlans);
lldpctl_atom_foreach(vlans, vlan) {
    vid = lldpctl_atom_get_int(vlan,
                               lldpctl_k_vlan_id);
    name = lldpctl_atom_get_str(vlan,
                                lldpctl_k_vlan_name);
    if (vid && name)
        printf("VLAN %d: %s\n", vid, name);
}
lldpctl_atom_dec_ref(vlans);
Internally, an atom is typed and reference counted. The size of the API is greatly limited thanks to this concept. There are currently more than one hundred pieces of information that can be retrieved from lldpd. Ultimately, the library will also enable the full configuration of lldpd. Currently, many aspects can only be configured through command-line flags. The use of the library does not replace lldpctl which will still be available and be the primary client of the library.

CLI
Having a configuration file had been requested for a long time. I didn't want to include a parser in lldpd: I am trying to keep it small. It was already possible to configure lldpd through lldpctl. Locations, network policies and power policies were the three items that could be configured this way. So, the next step was to enable lldpctl to read a configuration file, parse it and send the result to lldpd. As a bonus, why not provide a full CLI accepting the same statements, with inline help and completion?

Parsing & completion
Because of completion, it is difficult to use a YACC-generated parser. Instead, I define a tree where each node accepts a word. A node is defined with this function:
struct cmd_node *commands_new(
    struct cmd_node *,
    const char *,
    const char *,
    int(*validate)(struct cmd_env*, void *),
    int(*execute)(struct lldpctl_conn_t*, struct writer*,
        struct cmd_env*, void *),
    void *);
A node is defined by:
  • its parent,
  • an optional accepted static token,
  • a help string,
  • an optional validation function and
  • an optional function to execute if the current token is accepted.
When walking the tree, we maintain an environment which is both a key-value store and a stack of positions in the tree. The validation function can check the environment to see if we are in the right context (we want to accept the keyword foo only once, for example). The execution function can add the current token as a value in the environment, but it can also pop the current position in the tree to resume the walk from a previous node. As an example, see how the nodes for configuration of a coordinate-based location are registered:
/* Our root node */
struct cmd_node *configure_medloc_coord = commands_new(
    configure_medlocation,
    "coordinate", "MED location coordinate configuration",
    NULL, NULL, NULL);
/* The exit node.
   The validate function will check if we have both
   latitude and longitude. */
commands_new(configure_medloc_coord,
    NEWLINE, "Configure MED location coordinates",
    cmd_check_env, cmd_medlocation_coordinate,
    "latitude,longitude");
/* Store latitude. Once stored, we pop two positions
   to go back to the "root" node. The user can only
   enter latitude once. */
commands_new(
    commands_new(
        configure_medloc_coord,
        "latitude", "Specify latitude",
        cmd_check_no_env, NULL, "latitude"),
    NULL, "Latitude as xx.yyyyN or xx.yyyyS",
    NULL, cmd_store_env_value_and_pop2, "latitude");
/* Same thing for longitude */
commands_new(
    commands_new(
        configure_medloc_coord,
        "longitude", "Specify longitude",
        cmd_check_no_env, NULL, "longitude"),
    NULL, "Longitude as xx.yyyyE or xx.yyyyW",
    NULL, cmd_store_env_value_and_pop2, "longitude");
The definition of all commands is still a bit verbose but the system is simple enough yet powerful enough to cover all needed cases.

Readline
When faced with a CLI, we usually expect some perks like completion, history handling and help. The most used library to provide such features is the GNU Readline Library. Because this is a GPL library, I first searched for an alternative. There are several of them: From an API point of view, the first three libraries support the GNU Readline API. They also have a common native API. Moreover, this native API also handles tokenization. Therefore, I have developed the first version of the CLI with this API [3]. Unfortunately, I noticed later that this library is not very common in the Linux world and is not available in RHEL. Since I had used the native API, it was not possible to fall back to the GNU Readline library. So, let's switch! Thanks to the appropriate macro from the Autoconf Archive (with small modifications), the compilation and linking differences between the libraries are taken care of. Because the GNU Readline library does not come with a tokenizer, I had to write one myself. The API is also badly documented and it is difficult to know which symbol is available in which version. I have limited myself to:
  • readline(), add_history(),
  • rl_insert_text(),
  • rl_forced_update_display(),
  • rl_bind_key()
  • rl_line_buffer and rl_point.
Unfortunately, the various libedit libraries have a noop for rl_bind_key(). Therefore, completion and online help are not available with them. I have noticed that most BSDs come with the GNU Readline library preinstalled, so it could be considered a system library. Nonetheless, linking with libedit to avoid licensing issues is possible, and help can be obtained by prefixing a command with "help".

OS specific support

BSD support
Until version 0.7, lldpd was Linux-only. The rewrite to use Netlink was the occasion to abstract interfaces and to port to other OSes. The first port was for Debian GNU/kFreeBSD, then for FreeBSD, OpenBSD and NetBSD. They all share the same source code:
  • getifaddrs() to get the list of interfaces,
  • bpf(4) to attach to an interface to receive and send packets,
  • PF_ROUTE socket to be notified when a change happens.
Each BSD has its own ioctl() to retrieve VLAN, bridging and bonding bits, but they are quite similar. The code was usually adapted from ifconfig.c. The BSD ports have the same functionality as the Linux port, except for NetBSD, which lacks support for LLDP-MED inventory since I didn't find a simple way to retrieve DMI-related information. They also offer greater security by filtering packets sent. Moreover, OpenBSD allows locking the filters set on the socket:
/* Install write filter (optional) */
if (ioctl(fd, BIOCSETWF, (caddr_t)&fprog) < 0) {
    rc = errno;
    log_info("privsep", "unable to setup write BPF filter for %s",
        name);
    goto end;
}
/* Lock interface */
if (ioctl(fd, BIOCLOCK, (caddr_t)&enable) < 0) {
    rc = errno;
    log_info("privsep", "unable to lock BPF interface %s",
        name);
    goto end;
}
This is a very nice feature. lldpd is using a privileged process to open the raw socket. The socket is then transmitted to an unprivileged process. Without this feature, the unprivileged process can remove the BPF filters. I have ported the ability to lock a socket filter program to Linux. However, I still have to add a write filter.

OS X support
Once FreeBSD was supported, supporting OS X seemed easy. I got sponsored by xcloud.me which provided a virtual Mac server. Making lldpd work with OS X took only two days, including a full hour to guess how to get Apple Xcode without providing a credit card. To help people installing lldpd on OS X, I have also written a lldpd formula for Homebrew which seems to be the most popular package manager for OS X.

Upstart and systemd support
Many distributions propose upstart and systemd as a replacement or an alternative for the classic SysV init. Like most daemons, lldpd detaches itself from the terminal and runs in the background, by forking twice, once it is ready (for lldpd, this just means we have set up the control socket). While both upstart and systemd can accommodate daemons that behave like this, it is recommended not to fork. How do we advertise readiness in this case? With upstart, lldpd will send itself the SIGSTOP signal. upstart will detect this, resume lldpd with SIGCONT and assume it is ready. The code to support upstart is therefore quite simple. Instead of calling daemon(), do this:
const char *upstartjob = getenv("UPSTART_JOB");
if (!(upstartjob && !strcmp(upstartjob, "lldpd")))
    return 0;
log_debug("main", "running with upstart, don't fork but stop");
raise(SIGSTOP);
The job configuration file looks like this:
# lldpd - LLDP daemon
description "LLDP daemon"
start on net-device-up IFACE=lo
stop on runlevel [06]
expect stop
respawn
script
  . /etc/default/lldpd
  exec lldpd $DAEMON_ARGS
end script
systemd provides a socket to achieve the same goal. An application is expected to write READY=1 to the socket when it is ready. With the provided library, this is just a matter of calling sd_notify("READY=1\n"). Since sd_notify() has less than 30 lines of code, I have rewritten it to avoid an external dependency. The appropriate unit file is:
[Unit]
Description=LLDP daemon
Documentation=man:lldpd(8)
[Service]
Type=notify
NotifyAccess=main
EnvironmentFile=-/etc/default/lldpd
ExecStart=/usr/sbin/lldpd $DAEMON_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target

OS include files
Linux-specific include files were a major pain in previous versions of lldpd. The problems range from missing header files (like linux/if_bonding.h) to the use of kernel-only types. Those headers have a difficult history. They were first shipped with the C library but were rarely synced and almost always outdated. They were then extracted from a kernel version with almost no change and lagged behind the kernel version used by the released distribution [4]. Today, the problem is acknowledged and is being solved both by the distributions, which extract the headers from the packaged kernel, and by kernel developers, with a separation of kernel-only headers from user-space API headers. However, we still need to handle legacy. A good case is linux/ethtool.h:
  • It can just be absent.
  • It can use u8, u16 types which are kernel-only types. To work around this issue, type munging can be set up.
  • It can miss some definitions, like SPEED_10000. In this case, you either define the missing bits and find yourself with a long copy of the original header interleaved with #ifdef, or conditionally use each symbol. The latter solution is a burden by itself but it also hinders some functionalities that can be available in the running kernel.
The easy solution to all this mess is to just include the appropriate kernel headers into the source tree of the project. Thanks to Google ripping them for its Bionic C library, we know that copying kernel headers into a program does not create a derivative work.

  1. Therefore, the use of u_int16_t and u_int8_t types is a left-over of the previous serializer where the size of all members was important.
  2. For more comprehensive guidelines, be sure to check Writing a C library.
  3. Tokenization is not the only advantage of the libedit native API. The API is also cleaner, does not have a global state and has better documentation. All the implementations are also BSD licensed.
  4. For example, in Debian Sarge, the Linux kernel was a 2.6.8 (2004) while the kernel headers were extracted from some pre-2.6 kernel.

Keith Packard: DRI3000

DRI3000: Even Better Direct Rendering
This all started with the presentation that Eric Anholt and I did at the 2012 X developers conference, and subsequently wrote about in my DRI-Next posting. That discussion sketched out the goals of changing the existing DRI2-based direct rendering infrastructure. Last month, I gave a more detailed presentation at Linux.conf.au 2013 (the best free software conference in the world). That presentation was recorded, so you can watch it online. Or, you can read Nathan Willis' summary at lwn.net. That presentation contained a lot more details about the specific techniques that will be used to implement the new system; in particular it included some initial indications of what kind of performance benefits the overall system might be able to produce. I sat down today and wrote down an initial protocol definition for two new extensions (because two extensions are always better than one). Together, these are designed to provide complete support for direct rendering APIs like OpenGL and offer a better alternative to DRI2.
The DRI3 extension
Dave Airlie and Eric Anholt refused to let me call either actual extension DRI3000, so the new direct rendering extension is called DRI3. It uses POSIX file descriptor passing to share kernel objects between the X server and the application. DRI3 is a very small extension in three requests:
  1. Open. Returns a file descriptor for a direct rendering device along with the name of the driver for a particular API (OpenGL, Video, etc).
  2. PixmapFromBuffer. Takes a kernel buffer object (Linux uses DMA-BUF) and creates a pixmap that references it. Any place a Pixmap can be used in the X protocol, you can now talk about a DMA-BUF object. This allows an application to do direct rendering, and then pass a reference to those results directly to the X server.
  3. BufferFromPixmap. This takes an existing pixmap and returns a file descriptor for the underlying kernel buffer object. This is needed for the GL Texture from Pixmap extension.
For OpenGL, the plan is to create all of the buffer objects on the client side, then pass the back buffer to the X server for display on the screen. By creating pixmaps, we avoid needing new object types in the X server and can use existing X APIs that take pixmaps for these objects.
The Swap extension
Once you've got direct rendered content in a Pixmap, you'll want to display it on the screen. You could simply use CopyArea from the pixmap to a window, but that isn't synchronized to the vertical retrace signal. And, the semantics of the CopyArea operation preclude us from swapping the underlying buffers around, making it more expensive than strictly necessary. The Swap extension fills those needs. Because the DRI3 extension provides an X pixmap reference to the direct rendered content, the Swap extension doesn't need any new object types for its operation. Instead, it talks strictly about core X objects, using X pixmaps as the source of the new data and X drawables as the destination. The core of the Swap extension is one request, SwapRegion. This request moves pixels from a pixmap to a drawable. It uses an X Fixes Region object to specify the area of the destination being painted, and an offset within the source pixmap to align the two areas. A bunch of data is included in the reply from the SwapRegion request. First, you get a 64-bit sequence number identifying the swap itself. Then, you get a suggested geometry for the next source pixmap. Using the suggested geometry may result in performance improvements from the techniques described in the LCA talk above. The last bit of data included in the SwapRegion reply is a list of pixmaps which were used as source operands to earlier SwapRegion requests to the same drawable. Each pixmap is listed along with the 64-bit sequence number associated with an earlier SwapRegion operation which resulted in the contents which the pixmap now contains. Ok, so that sounds really confusing. Some examples are probably necessary. I'm hoping you'll be able to tell that in both cases, the idle swap count tries to name the swap sequence at which time the destination drawable contained the contents currently in the pixmap. Note that even if the SwapRegion is implemented as a Copy operation, the provided source pixmap may not be included in the idle list, as the copy may be delayed to meet the synchronization requirements specified by the client. Finally, if you want to throttle rendering based upon when frames appear on the screen, Swap offers an event that can be delivered to the drawable after the operation actually takes place. Because the Swap extension needs to supply all of the OpenGL SwapBuffers semantics (including a multiplicity of OpenGL extensions related to that), I've stolen a handful of DRI2 requests to provide the necessary bits for that:
  1. SwapGetMSC
  2. SwapWaitMSC
  3. SwapWaitSBC
These work just like the DRI2 requests of the same names.
Current State of the Extensions
Both of these extensions have an initial protocol specification written down and stored in git:
  1. DRI3 protocol
  2. Swap protocol

6 November 2012

Russ Allbery: Review: Cerebus

Review: Cerebus, by Dave Sim
Series: Cerebus #1
Publisher: Aardvark-Vanaheim
Copyright: August 1987
Printing: July 2003
ISBN: 0-919359-08-6
Format: Graphic novel
Pages: 546
Cerebus is something of a legend in comics. Begun in December of 1977 by Dave Sim, it was one of the first entirely independent, self-published comics in a field dominated by the large work-for-hire companies like Marvel and DC. It ran for 300 issues and nearly 27 years and became one of the most influential independent comic books of all time, in part due to Sim's outspoken views in favor of creator rights and his regular use of the editorial pages in Cerebus issues to air those views. This collection (the first "phonebook") collects issues 1 through 25, with one of the amazing wrap-around covers that makes all of the phonebooks so beautiful (possibly partly by later Cerebus collaborator Gerhard, although if so it's uncredited so far as I can tell). Cerebus reliably has some of the best black-and-white art you will ever see in comics. There is some debate over where to start with Cerebus, and a faction that, for good reasons, argues for starting with the second phonebook (High Society). While these first twenty-five issues do introduce the reader to a bunch of important characters (Elrod, Lord Julius, Jaka, Artemis Roach, and Suenteus Po, for example), all those characters are later reintroduced and nothing that happens here is hugely vital for the overall story. It's also quite rough, starting as Conan parody with almost no depth. The first half or so of this collection features lots of short stories with little or no broader significance, and the early ones are about little other than Cerebus's skills and fighting abilities. That said, when reading the series, I like to start at the beginning. It is nice to follow the characters from their moment of first introduction, and it's delightful to watch Sim's ability grow (surprisingly quickly) through the first few issues. Cerebus #1 is bad: crude, simplistic artwork, almost nothing in the way of a story, and lots of purple narration. But flipping forward even to Cerebus #6 (the first appearance of Jaka), one sees a remarkable difference. By Cerebus #7, Cerebus looks like himself, the plot is getting more complex, and Sim is clearly hitting his stride. And, by the end of this collection, the art has moved from crude past competent and into truly beautiful in places. It's one of the few black-and-white comics where I never miss color. The detailed line work is more enjoyable than I think any coloring could be. The strength of Cerebus as an ongoing character slowly emerges from behind the parody. What I like the most about Cerebus is that he's neither a predestined victor (apart from the early issues that follow the Conan model most closely) nor a pure loner who stands apart from the world. He gets embroiled in political affairs, but almost always for his own reasons (primarily wealth). He has his own moral code, but it's fluid and situational; it's the realistic muddle of impulse and vague principle that most of us fall back on in our everyday life, which is remarkably unlike the typical moral code in comics (or even fiction in general). And while he is in one sense better and more powerful than anyone else in the story, that doesn't mean Cerebus gets what he wants. Most stories here end up going rather poorly for him, forcing daring escapes or frustrating cutting of losses. Sim quickly finds a voice for Cerebus that's irascible, wise, practical, and a bit world-weary, as well as remarkably unflappable. He's one of the best protagonists in comics, and that's already clear by the end of this collection. 
Parody is the focus of these first issues, which is a mixed bag. The early issues are fairly weak sword-and-sorcery parody (particularly Red Sonja, primarily a vehicle for some tired sexist jokes) and worth reading only for the development in Sim's art style and the growth of Cerebus as a unique voice. Sim gets away from straight parody for the middle of the collection, but then makes an unfortunate return for the final few issues, featuring parodies of Man-Thing and X-Men that I thought were more forced than funny. You have to have some tolerance for this, and (similar to early Pratchett) a lot of it isn't as funny as the author seems to think it is. That said, three of Sim's most brilliant ongoing characters are parodies, just ones that are mixed and inserted into the "wrong" genres in ways that bring them alive. Elrod of Melvinbone, a parody of Moorcock's Elric of Melnibone who speaks exactly like Foghorn Leghorn, should not work and yet does. He's the source of the funniest moments in this collection. His persistent treatment of Cerebus as a kid in a bunny suit shouldn't be as funny as it is, but it reliably makes me laugh each time I re-read this collection. Lord Julius is a straight insertion of Groucho Marx who really comes into his own in the next collection, High Society, but some of the hilarious High Society moments are foreshadowed here. And Artemis Roach, who starts as a parody of Batman and will later parody a huge variety of comic book characters, provides several delightful moments with Cerebus as straight man. I'm not much of a fan of parody, but I still think Cerebus is genuinely funny. High Society is definitely better, but I think one would miss some great bits by skipping over the first collection. Much of what makes it work is the character of Cerebus, who is in turn a wonderful straight man for Sim's wilder characters and an endless source of sharp one-liners. It's easy to really care about and root for Cerebus, even when he's being manipulative and amoral, because he's so straightforward and forthright about it. The world Sim puts him into is full of chaos, ridiculousness, and unfairness, and Cerebus is the sort of character to put his head down, make a few sarcastic comments, and then get on with it. It's fun to watch. One final note: I've always thought the "phonebook" collections were one of Sim's best ideas. Unlike nearly all comic book collections, a Cerebus phonebook provides enough material to be satisfying and has always felt like a good value for the money. I wish more comic book publishers would learn from Sim's example and produce larger collections that aren't hardcover deluxe editions (although Sim has an admitted advantage from not having to reproduce color). Followed by High Society. Rating: 7 out of 10
